A learning-based framework for the representation of domain-specific images is proposed, in which joint compression and denoising are performed by a VQ-based multi-layer network. Although the network learns to compress images from a training set, its compression performance generalizes well to images from a test set. Moreover, when fed with noisy versions of the test images, the network, having learned priors from clean images, also efficiently denoises them during reconstruction. The proposed framework is a regularized version of Residual Quantization (RQ), in which the quantization error from each stage is further quantized at the next stage. Instead of learning codebooks with k-means, which over-trains on high-dimensional vectors, we show that simply generating the codewords from a random but properly regularized distribution suffices to compress the images globally, without resorting to a patch-based division of the images. Experiments are conducted on the \textit{CroppedYale-B} set of facial images, and the method is compared with the JPEG-2000 codec for compression and with BM3D for denoising, showing promising results.
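The residual quantization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the codebook size, the number of stages, and the use of plain Gaussian codewords with geometrically shrinking scales are assumptions made for the example (the paper regularizes the codeword distribution more carefully).

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Greedy RQ: each stage quantizes the residual (quantization
    error) left over by the previous stage with its own codebook."""
    residual = x.copy()
    codes = []
    for C in codebooks:  # C has shape (K, d)
        # pick the codeword nearest to the current residual
        idx = int(np.argmin(np.linalg.norm(C - residual, axis=1)))
        codes.append(idx)
        residual = residual - C[idx]
    return codes, residual  # residual = final quantization error

def reconstruct(codes, codebooks):
    """Sum the selected codeword from every stage."""
    return sum(C[i] for i, C in zip(codes, codebooks))

rng = np.random.default_rng(0)
d, K, stages = 16, 64, 8  # illustrative sizes, not from the paper
# Random (rather than k-means-trained) codebooks; the shrinking scale
# 2**-s mimics the decreasing energy of successive residuals.
codebooks = [rng.normal(scale=2.0 ** -s, size=(K, d)) for s in range(stages)]

x = rng.normal(size=d)
codes, err = residual_quantize(x, codebooks)
x_hat = reconstruct(codes, codebooks)
# the reconstruction plus the final residual recovers the input
assert np.allclose(x_hat + err, x)
```

The compressed representation is the list of `stages` codeword indices (here `stages * log2(K)` bits per vector), and decoding is a plain sum of codewords, which is what makes the multi-layer structure cheap to invert.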